Large language models have demonstrated the ability to condition on and generate both natural-language and programming-language text. Such models open up the possibility of multi-language code generation: could code generation models generalize knowledge from one language to another? Although contemporary code generation models can generate semantically correct Python code, little is known about their abilities with other languages. We facilitate the exploration of this topic by proposing MultiPL-E, the first multi-language parallel benchmark for natural-language-to-code generation. MultiPL-E extends the HumanEval benchmark (Chen et al., 2021) to support 18 additional programming languages, covering a range of programming paradigms and popularity. We evaluate two state-of-the-art code generation models on MultiPL-E: Codex and InCoder. We find that on several languages Codex matches, and even exceeds, its performance on Python. The range of programming languages represented in MultiPL-E allows us to explore the impact of language frequency and language features on model performance. Finally, the MultiPL-E approach of compiling a code generation benchmark to new programming languages is both scalable and extensible. We describe a general approach that makes it easy to add support for new benchmarks and languages.
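A minimal sketch of the general idea of compiling a HumanEval-style Python prompt (signature plus docstring examples) into another language's function stub. The helper name, type map, and example problem are illustrative assumptions, not the MultiPL-E implementation.

```python
# Illustrative sketch: render a Python-style problem as a TypeScript stub.
# The function name, type map, and example are hypothetical.

PY_TYPE_MAP = {"int": "number", "str": "string", "bool": "boolean"}

def to_typescript_prompt(name, params, ret, doc):
    """params: list of (arg_name, python_type); doc: the natural-language prompt."""
    args = ", ".join(f"{p}: {PY_TYPE_MAP[t]}" for p, t in params)
    header = f"function {name}({args}): {PY_TYPE_MAP[ret]} {{"
    comment = "\n".join("// " + line for line in doc.strip().splitlines())
    return f"{comment}\n{header}\n    // model completes the body here\n"

if __name__ == "__main__":
    print(to_typescript_prompt(
        name="add",
        params=[("x", "int"), ("y", "int")],
        ret="int",
        doc="Return the sum of x and y.\n>>> add(2, 3)\n5",
    ))
```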
This study explores temporal concept drift and temporal alignment in knowledge organization systems (KOS). A comparative analysis was conducted using the 1910 Library of Congress Subject Headings (LCSH), 2020 FAST topics, and automatic indexing. The use case involves 90 nineteenth-century Encyclopaedia Britannica entries. Entries were indexed using two approaches: 1) full-text indexing; 2) named entity recognition performed on the entries with Stanza, Stanford's NLP toolkit, followed by indexing with the Helping Interdisciplinary Vocabulary Engineering (HIVE) application using both the 1910 LCSH and FAST topics. The analysis focused on three goals: 1) identifying results unique to the 1910 LCSH output; 2) identifying terms in that unique set that have been deprecated in the contemporary LCSH, demonstrating temporal concept drift; and 3) exploring the historical significance of these deprecated terms. The results confirm that historical vocabularies can be used to generate subject headings that are no longer in use, representing concept drift across KOS and historical resources. A methodological contribution is made, demonstrating how changes in a KOS can be studied over time and how the contextualization of historical humanities resources can be improved.
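A sketch of this kind of pipeline under stated assumptions: the Stanza NER calls are real API usage, but the tiny heading sets stand in for the full 1910 LCSH and contemporary LCSH vocabularies accessed through HIVE in the actual study.

```python
# Sketch: NER on an entry with Stanza, then a set difference as a crude
# proxy for finding deprecated headings (temporal concept drift candidates).
import stanza

stanza.download("en")                        # one-time model download
nlp = stanza.Pipeline("en", processors="tokenize,ner")

entry_text = "Abyssinia lies south of Egypt and east of the Soudan."
entities = {ent.text for ent in nlp(entry_text).ents}
print("Named entities:", entities)

# Hypothetical heading sets; the study uses full vocabularies via HIVE.
lcsh_1910_output = {"Abyssinia", "Soudan", "Egypt"}
lcsh_current = {"Ethiopia", "Sudan", "Egypt"}

deprecated = lcsh_1910_output - lcsh_current
print("Headings absent from the contemporary vocabulary:", deprecated)
```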
Measuring the distance between ontology elements is a fundamental component of any matching solution. String-based distance metrics that rely on discrete symbol operations are notorious for shallow syntactic matching. In this study, we explore the Wasserstein distance metric over embeddings of ontology concepts. The Wasserstein distance metric targets continuous spaces that can encompass linguistic, structural, and logical information. In our exploratory study, we use the pre-trained word embedding system fastText to embed ontology element labels. We examine the effectiveness of the Wasserstein distance for measuring similarity between ontologies (as blocks), discovering matchings between individual elements, and refining matchings with contextual information. Our experiments on the OAEI conference track and the MSE benchmark achieve competitive results compared with leading systems such as AML and LogMap. The results indicate a promising trajectory for applying optimal transport and the Wasserstein distance to improve embedding-based unsupervised ontology matching.
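A sketch of comparing two ontology concepts by the Wasserstein distance between the embeddings of their label tokens, assuming the POT library (`pip install pot`); random vectors stand in for actual fastText lookups, and the concept labels are invented.

```python
# Sketch: Wasserstein (earth mover's) distance between two sets of
# token embeddings, one set per ontology concept label.
import numpy as np
import ot  # Python Optimal Transport

rng = np.random.default_rng(0)

def embed_tokens(tokens):
    """Placeholder for fastText lookups: one 300-d vector per label token."""
    return rng.normal(size=(len(tokens), 300))

concept_a = embed_tokens(["conference", "paper", "author"])
concept_b = embed_tokens(["article", "writer", "meeting"])

# Uniform weights over tokens and a pairwise Euclidean cost matrix.
wa = np.full(len(concept_a), 1.0 / len(concept_a))
wb = np.full(len(concept_b), 1.0 / len(concept_b))
cost = ot.dist(concept_a, concept_b, metric="euclidean")

distance = ot.emd2(wa, wb, cost)   # exact optimal-transport cost
print(f"Wasserstein distance between concepts: {distance:.3f}")
```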
Metal-organic frameworks (MOFs) are a class of modular, porous crystalline materials with enormous potential to revolutionize applications such as gas storage, molecular separation, chemical sensing, catalysis, and drug delivery. The Cambridge Structural Database (CSD) reports 10,636 synthesized MOF crystals and additionally contains ca. 114,373 MOF-like structures. The sheer number of synthesized (plus potentially synthesizable) MOF structures requires researchers to pursue computational techniques to screen and isolate MOF candidates. In this demo paper, we describe our effort to leverage knowledge graph methods to facilitate MOF prediction, discovery, and synthesis. We present challenges and case studies on (1) construction of a MOF knowledge graph (MOF-KG) from structured and unstructured sources, and (2) leveraging the MOF-KG to discover new or missing knowledge.
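A minimal illustration of the kind of triples a MOF knowledge graph might hold, using rdflib. The namespace and property names are invented for illustration and are not the MOF-KG schema.

```python
# Sketch: a few example triples describing a well-known MOF (HKUST-1).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

MOF = Namespace("http://example.org/mof#")   # hypothetical namespace
g = Graph()
g.bind("mof", MOF)

g.add((MOF.HKUST_1, RDF.type, MOF.MetalOrganicFramework))
g.add((MOF.HKUST_1, RDFS.label, Literal("HKUST-1")))
g.add((MOF.HKUST_1, MOF.hasMetalNode, MOF.Copper))
g.add((MOF.HKUST_1, MOF.hasOrganicLinker, MOF.BTC))
g.add((MOF.HKUST_1, MOF.application, Literal("gas storage")))

print(g.serialize(format="turtle"))
```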
In this work, we introduce a hypergraph representation learning framework called Hypergraph Neural Networks (HNN) that jointly learns hyperedge embeddings along with a set of hyperedge-dependent embeddings for each node in the hypergraph. HNN derives multiple embeddings per node in the hypergraph where each embedding for a node is dependent on a specific hyperedge of that node. Notably, HNN is accurate, data-efficient, flexible with many interchangeable components, and useful for a wide range of hypergraph learning tasks. We evaluate the effectiveness of the HNN framework for hyperedge prediction and hypergraph node classification. We find that HNN achieves an overall mean gain of 7.72% and 11.37% across all baseline models and graphs for hyperedge prediction and hypergraph node classification, respectively.
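A toy numpy illustration of the central idea of hyperedge-dependent node embeddings: each node gets a separate embedding per incident hyperedge rather than a single embedding. This is a simplification for intuition, not the HNN architecture from the paper.

```python
# Toy sketch: hyperedge embeddings as means of member features, then one
# embedding per (node, incident hyperedge) pair via a shared projection.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_edges, d = 5, 3, 8
X = rng.normal(size=(n_nodes, d))            # initial node features
# Incidence matrix: H[v, e] = 1 if node v belongs to hyperedge e.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)

# Hyperedge embeddings: mean of member node features.
E = (H.T @ X) / H.sum(axis=0, keepdims=True).T

# Hyperedge-dependent node embeddings: combine a node's features with the
# embedding of one specific incident hyperedge.
W = rng.normal(size=(2 * d, d))
node_edge_emb = {}
for v in range(n_nodes):
    for e in np.flatnonzero(H[v]):
        node_edge_emb[(v, e)] = np.tanh(np.concatenate([X[v], E[e]]) @ W)

print(f"{len(node_edge_emb)} (node, hyperedge) embeddings of dim {d}")
```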
Learning fair graph representations for downstream applications is becoming increasingly important, but existing work has mostly focused on improving fairness at the global level by either modifying the graph structure or objective function without taking into account the local neighborhood of a node. In this work, we formally introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings. We argue that the notion of neighborhood fairness is more appropriate since GNN-based models operate at the local neighborhood level of a node. Our neighborhood fairness framework has two main components that are flexible for learning fair graph representations from arbitrary data: the first aims to construct fair neighborhoods for any arbitrary node in a graph and the second enables adaptation of these fair neighborhoods to better capture certain application or data-dependent constraints, such as allowing neighborhoods to be more biased towards certain attributes or neighbors in the graph. Furthermore, while link prediction has been extensively studied, we are the first to investigate the graph representation learning task of fair link classification. We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks including fair link prediction, link classification, and learning fair graph embeddings. Notably, our approach achieves not only better fairness but also increases the accuracy in the majority of cases across a wide variety of graphs, problem settings, and metrics.
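A toy sketch of the "fair neighborhood" intuition: resample a node's neighborhood so sensitive-attribute groups are represented according to target proportions, with an optional `weights` knob that biases toward particular attributes. The function and data are illustrative, not the framework's actual construction algorithm.

```python
# Sketch: balance (or bias) a neighborhood across sensitive-attribute groups.
import random
from collections import defaultdict

def fair_neighborhood(neighbors, attr, k, weights=None, seed=0):
    """neighbors: node ids; attr: node -> group; k: target neighborhood size."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for v in neighbors:
        groups[attr[v]].append(v)
    weights = weights or {g: 1.0 for g in groups}
    total = sum(weights[g] for g in groups)
    sampled = []
    for g, members in groups.items():
        quota = max(1, round(k * weights[g] / total))
        sampled += rng.sample(members, min(quota, len(members)))
    return sampled[:k]

neighbors = [1, 2, 3, 4, 5, 6, 7, 8]
attr = {1: "a", 2: "a", 3: "a", 4: "a", 5: "a", 6: "a", 7: "b", 8: "b"}
print(fair_neighborhood(neighbors, attr, k=4))                            # balanced
print(fair_neighborhood(neighbors, attr, k=4, weights={"a": 3, "b": 1}))  # biased
```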
Facial action units (FAUs) are critical for fine-grained facial expression analysis. Although FAU detection has been actively studied on ideally high-quality images, it has not been thoroughly studied under heavily occluded conditions. In this paper, we propose the first occlusion-robust FAU recognition method to maintain FAU detection performance under heavy occlusions. Our novel approach takes advantage of rich information from the latent space of a masked autoencoder (MAE) and transforms it into FAU features. Bypassing the occlusion reconstruction step, our model efficiently extracts FAU features of occluded faces by mining the latent space of a pretrained masked autoencoder. Both node- and edge-level knowledge distillation are also employed to guide our model to find a mapping between latent space vectors and FAU features. Facial occlusion conditions, including random small patches and large blocks, are thoroughly studied. Experimental results on the BP4D and DISFA datasets show that our method achieves state-of-the-art performance under the studied facial occlusions, significantly outperforming existing baseline methods. In particular, even under heavy occlusion, the proposed method achieves performance comparable to that of state-of-the-art methods under normal conditions.
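A PyTorch sketch of what node- and edge-level distillation can look like: per-AU features of a student are pushed toward a teacher's, and so are the pairwise relations between AU features. The shapes, loss weighting, and cosine-similarity choice are assumptions, not the paper's implementation.

```python
# Sketch: node-level (feature matching) + edge-level (relation matching)
# distillation between teacher and student FAU features.
import torch
import torch.nn.functional as F

def distillation_loss(student_feats, teacher_feats, edge_weight=1.0):
    """student_feats, teacher_feats: (batch, num_aus, dim) tensors."""
    # Node-level: match each AU feature vector directly.
    node_loss = F.mse_loss(student_feats, teacher_feats)

    # Edge-level: match cosine-similarity matrices between AU features.
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    edge_loss = F.mse_loss(s @ s.transpose(1, 2), t @ t.transpose(1, 2))

    return node_loss + edge_weight * edge_loss

student = torch.randn(2, 12, 64, requires_grad=True)
teacher = torch.randn(2, 12, 64)
loss = distillation_loss(student, teacher)
loss.backward()
print(float(loss))
```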
We develop a wall model for large-eddy simulation (LES) that takes into account various pressure-gradient effects using multi-agent reinforcement learning (MARL). The model is trained using low-Reynolds-number flow over periodic hills with agents distributed on the wall along the computational grid points. The model utilizes a wall eddy-viscosity formulation as the boundary condition, which is shown to provide better predictions of the mean velocity field, rather than the typical wall-shear stress formulation. Each agent receives states based on local instantaneous flow quantities at an off-wall location, computes a reward based on the estimated wall-shear stress, and provides an action to update the wall eddy viscosity at each time step. The trained wall model is validated in wall-modeled LES (WMLES) of flow over periodic hills at higher Reynolds numbers, and the results show the effectiveness of the model on flow with pressure gradients. The analysis of the trained model indicates that the model is capable of distinguishing between the various pressure gradient regimes present in the flow.
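A skeleton of the per-agent interaction described above: a state built from local instantaneous flow quantities at an off-wall location, an action that updates the wall eddy viscosity, and a reward based on the error in the estimated wall-shear stress. All names, scalings, and numbers are placeholders, not tied to an actual LES solver or the trained policy.

```python
# Sketch of one wall agent's state/action/reward loop, with placeholder values.
import numpy as np

class WallAgent:
    def __init__(self, nu_t_wall=1e-4):
        self.nu_t_wall = nu_t_wall            # wall eddy viscosity (boundary condition)

    def state(self, u_off_wall, y_off_wall, dpdx, nu):
        """Local instantaneous flow quantities at an off-wall location."""
        return np.array([u_off_wall, y_off_wall, dpdx, nu, self.nu_t_wall])

    def act(self, action):
        """Action: bounded multiplicative update of the wall eddy viscosity."""
        self.nu_t_wall *= float(np.clip(action, 0.9, 1.1))

    def reward(self, tau_w_model, tau_w_reference):
        """Higher reward when the modeled wall-shear stress matches the reference."""
        return -abs(tau_w_model - tau_w_reference) / abs(tau_w_reference)

agent = WallAgent()
s = agent.state(u_off_wall=0.8, y_off_wall=0.05, dpdx=-0.1, nu=1e-5)
agent.act(action=1.05)                        # would come from the trained policy
print(s, agent.nu_t_wall, agent.reward(tau_w_model=0.011, tau_w_reference=0.012))
```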
Evaluation of text generation to date has mainly focused on content created sequentially rather than on improvements to existing text. Writing, however, is naturally an iterative and incremental process that requires expertise in different modular skills, such as fixing outdated information or making the style more consistent. Even so, comprehensive evaluation of a model's capacity to perform these skills and to edit text remains scarce. This work introduces EditEval: an instruction-based benchmark and evaluation suite that leverages existing and new datasets for the automatic evaluation of editing capabilities, such as making text more cohesive and paraphrasing. We evaluate several pre-trained models, which shows that InstructGPT and PEER perform best, but that most baselines fall below the supervised state of the art, particularly when neutralizing and updating information. Our analysis also shows that commonly used metrics for editing tasks do not always correlate well, and that optimizing for the prompt with the highest performance does not necessarily yield the strongest robustness across different models. Through the release of this benchmark and a publicly available leaderboard challenge, we hope to unlock future research in developing models capable of iterative and more controllable editing.
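A minimal sketch of what an instruction-based editing evaluation loop looks like. The record format, the `edit_model` stub, and the token-F1 scorer are all illustrative assumptions; they are not EditEval's actual schema or metrics.

```python
# Sketch: score a model's edits against references on instruction-based tasks.
def token_f1(prediction: str, reference: str) -> float:
    p, r = prediction.lower().split(), reference.lower().split()
    common = len(set(p) & set(r))
    if not p or not r or not common:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)

def edit_model(instruction: str, text: str) -> str:
    """Stand-in for a real model call (an API or a local checkpoint)."""
    return text  # identity baseline: return the input unchanged

dataset = [
    {"instruction": "Fix the grammar.",
     "input": "He go to school yesterday.",
     "reference": "He went to school yesterday."},
]

scores = [token_f1(edit_model(ex["instruction"], ex["input"]), ex["reference"])
          for ex in dataset]
print(f"mean score: {sum(scores) / len(scores):.3f}")
```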
Accurate traffic forecasting is vital for intelligent transportation systems. Although many deep learning models have achieved state-of-the-art results for 1-hour traffic forecasting, long-term traffic forecasting spanning multiple hours remains a major challenge. Moreover, most existing deep learning traffic forecasting models are black boxes, which raises additional challenges related to explainability and interpretability. We develop the Graph Pyramid Autoformer (X-GPA), an attention-based spatial-temporal graph neural network that uses a novel pyramidal autocorrelation attention mechanism. It can learn from long time series on graphs and improves long-term traffic forecasting accuracy. Compared with several state-of-the-art methods, our model improves long-term traffic forecasting accuracy by up to 35%. The attention-based scores from the X-GPA model provide spatial and temporal explanations grounded in the traffic dynamics, which differ between normal and peak-hour traffic and between weekday and weekend traffic.
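A numpy sketch of the FFT-based autocorrelation attention step in the Autoformer style, which pyramidal autocorrelation attention builds on: correlations over time delays are computed in the frequency domain, and time-delayed copies of the values are aggregated. This is a single-series toy, not the X-GPA model.

```python
# Sketch: autocorrelation attention for one series via the FFT.
import numpy as np

def autocorrelation_attention(q, k, v, top_k=3):
    """q, k, v: 1-D series of equal length; aggregate time-delayed copies of v."""
    n = len(q)
    # Cross-correlation between q and k via the Wiener-Khinchin relation.
    corr = np.fft.irfft(np.fft.rfft(q) * np.conj(np.fft.rfft(k)), n=n)
    lags = np.argsort(corr)[-top_k:]                           # strongest delays
    weights = np.exp(corr[lags]) / np.exp(corr[lags]).sum()    # softmax over lags
    # Weighted sum of v rolled by each selected delay.
    return sum(w * np.roll(v, int(lag)) for w, lag in zip(weights, lags))

t = np.linspace(0, 8 * np.pi, 256)
series = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
out = autocorrelation_attention(series, series, series)
print(out.shape)
```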